Multiple Linear Regression - Model Building

Notes and in-class exercises

You can download the .qmd file for this activity here and open it in RStudio. The rendered version is posted on the course website (Activities tab). I often experiment with the class activities (and see them live!) and make updates, but I always post the final version before class starts. To be sure you have the most up-to-date copy, please download it once you’ve settled in, before class begins.

Notes on Model Building

Learning goals

By the end of this lesson, you should be able to:

  • Explain when variables are redundant or multicollinear.
  • Relate redundancy and multicollinearity to coefficient estimates and \(R^2\).
  • Explain why adjusted \(R^2\) is preferable to multiple \(R^2\) when comparing models with different numbers of predictors.

Readings and videos

Today is a day to discover ideas, so there are no readings or videos to go through before class. If you want to see today’s ideas presented in a different way, you can revisit the related course resources after class.

File organization: Save this file in the “Activities” subfolder of your “STAT155” folder.

Exercises

Today we’ll visit predictive research questions and some considerations when trying to build strong models. In particular, we’ll consider nuances and limitations to indiscriminately adding more predictors to our model. To explore these ideas, load the following data on penguins:

# Load packages & data
library(tidyverse)
data(penguins)
head(penguins)
##   species    island bill_len bill_dep flipper_len body_mass    sex year
## 1  Adelie Torgersen     39.1     18.7         181      3750   male 2007
## 2  Adelie Torgersen     39.5     17.4         186      3800 female 2007
## 3  Adelie Torgersen     40.3     18.0         195      3250 female 2007
## 4  Adelie Torgersen       NA       NA          NA        NA   <NA> 2007
## 5  Adelie Torgersen     36.7     19.3         193      3450 female 2007
## 6  Adelie Torgersen     39.3     20.6         190      3650   male 2007

You can find a codebook for these data by typing ?penguins in your console (not in the .qmd file). Our goal throughout will be to build a model of bill lengths (in mm):

(Art by @allison_horst)

To get started, note that the flipper_len variable currently measures flipper length in mm. Let’s create and save a new variable named flipper_len_cm, which measures flipper length in cm. NOTE: there are 10 mm in 1 cm.

penguins <- penguins %>% 
    mutate(flipper_len_cm = flipper_len / 10)
head(penguins)
##   species    island bill_len bill_dep flipper_len body_mass    sex year
## 1  Adelie Torgersen     39.1     18.7         181      3750   male 2007
## 2  Adelie Torgersen     39.5     17.4         186      3800 female 2007
## 3  Adelie Torgersen     40.3     18.0         195      3250 female 2007
## 4  Adelie Torgersen       NA       NA          NA        NA   <NA> 2007
## 5  Adelie Torgersen     36.7     19.3         193      3450 female 2007
## 6  Adelie Torgersen     39.3     20.6         190      3650   male 2007
##   flipper_len_cm
## 1           18.1
## 2           18.6
## 3           19.5
## 4             NA
## 5           19.3
## 6           19.0

Run the code chunk below to build a bunch of models that you’ll be exploring in the exercises:

penguin_model_1 <- lm(bill_len ~ flipper_len, penguins)
penguin_model_2 <- lm(bill_len ~ flipper_len_cm, penguins)
penguin_model_3 <- lm(bill_len ~ flipper_len + flipper_len_cm, penguins)
penguin_model_4 <- lm(bill_len ~ body_mass, penguins)
penguin_model_5 <- lm(bill_len ~ flipper_len + body_mass, penguins)

Exercise 1: Modeling bill length by flipper length

What can a penguin’s flipper (arm) length tell us about their bill length? To answer this question, we’ll consider 3 of our models:

model predictors
penguin_model_1 flipper_len
penguin_model_2 flipper_len_cm
penguin_model_3 flipper_len + flipper_len_cm

Plots of the first two models are below:

ggplot(penguins, aes(y = bill_len, x = flipper_len)) + 
    geom_point() +
    geom_smooth(method = "lm", se = FALSE)


ggplot(penguins, aes(y = bill_len, x = flipper_len_cm)) + 
    geom_point() +
    geom_smooth(method = "lm", se = FALSE)

  1. Before examining the model summaries, check your intuition. Do you think the penguin_model_2 R-squared will be less than, equal to, or more than that of penguin_model_1? Similarly, how do you think the penguin_model_3 R-squared will compare to that of penguin_model_1?

  2. Check your intuition: Examine the R-squared values for the three penguin models and summarize how these compare.

summary(penguin_model_1)$r.squared
## [1] 0.430574
summary(penguin_model_2)$r.squared
## [1] 0.430574
summary(penguin_model_3)$r.squared
## [1] 0.430574
  3. Explain why your observation in part b makes sense. Support your reasoning with a plot of just the 2 predictors: flipper_len_cm vs flipper_len.
ggplot(penguins, aes(x = flipper_len, y = flipper_len_cm)) +
    geom_point()

  4. OPTIONAL challenge: In summary(penguin_model_3), the flipper_len coefficient is NA. Explain why this makes sense. HINT: Thinking about what you learned about controlling for covariates, why wouldn’t it make sense to interpret this coefficient? BONUS: For those of you who have taken MATH 236, this has to do with matrices that are not of full rank!
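To see this in action, you can peek at the fitted coefficients directly. This is a quick sketch, not part of the original activity; alias() is base R's tool for detecting linearly dependent model terms:

```r
# Print all coefficients; the redundant predictor's coefficient shows up as NA
coef(penguin_model_3)

# alias() reports which model terms are exact linear combinations of others
alias(penguin_model_3)
```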

Exercise 2: Incorporating body_mass

In this exercise you’ll consider 3 models of bill_len:

model predictors
penguin_model_1 flipper_len
penguin_model_4 body_mass
penguin_model_5 flipper_len + body_mass
summary(penguin_model_1)$r.squared
## [1] 0.430574
summary(penguin_model_4)$r.squared
## [1] 0.3541557
  1. Which is the better predictor of bill_len: flipper_len or body_mass? Provide some numerical evidence.

  2. penguin_model_5 incorporates both flipper_len and body_mass as predictors. Before examining a model summary, ask your gut: Will the penguin_model_5 R-squared be close to 0.35, close to 0.43, or greater than 0.6?

  3. Check your intuition. Report the penguin_model_5 R-squared and summarize how this compares to that of penguin_model_1 and penguin_model_4.

summary(penguin_model_5)$r.squared
## [1] 0.4328544
  4. Explain why your observation in part c makes sense. Support your reasoning with a plot of the 2 predictors: flipper_len vs body_mass.
ggplot(penguins, aes(x = flipper_len, y = body_mass)) +
    geom_point()
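If you’d like a numerical companion to this plot, the correlation between the two predictors summarizes how strongly they’re associated. A small sketch, assuming the penguins data from above is loaded:

```r
# Correlation between the two predictors, dropping rows with missing values
cor(penguins$flipper_len, penguins$body_mass, use = "complete.obs")
```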

Exercise 3: Redundancy and Multicollinearity

The exercises above have illustrated special phenomena in multivariate modeling:

  • two predictors are redundant if they contain the same exact information
  • two predictors are multicollinear if they are strongly associated (they contain very similar information) but are not completely redundant.

Recall that we examined 5 models:

model predictors
penguin_model_1 flipper_len
penguin_model_2 flipper_len_cm
penguin_model_3 flipper_len + flipper_len_cm
penguin_model_4 body_mass
penguin_model_5 flipper_len + body_mass
  1. Which model had redundant predictors and which predictors were these?
  2. Which model had multicollinear predictors and which predictors were these?
  3. In general, what happens to the R-squared value if we add a redundant predictor to a model: will it decrease, stay the same, increase by a small amount, or increase by a significant amount?
  4. Similarly, what happens to the R-squared value if we add a multicollinear predictor to a model: will it decrease, stay the same, increase by a small amount, or increase by a significant amount?

Exercise 4: Considerations for strong models

Let’s dive deeper into important considerations when building a strong model. We’ll use a subset of the penguins data for exploring these ideas.

# For illustration purposes only, take a sample of 10 penguins.
# We'll discuss this code later in the course!
set.seed(155)
penguins_small <- sample_n(penguins, size = 10) %>%
  mutate(flipper_len = jitter(flipper_len))

Consider 3 models of bill length:

# A model with one predictor (flipper_len)
poly_mod_1 <- lm(bill_len ~ flipper_len, penguins_small)

# A model with two predictors (flipper_len and flipper_len^2)
poly_mod_2 <- lm(bill_len ~ poly(flipper_len, 2), penguins_small)

# A model with nine predictors (flipper_len, flipper_len^2, ... on up to flipper_len^9)
poly_mod_9 <- lm(bill_len ~ poly(flipper_len, 9), penguins_small)
  1. Before doing any analysis, which of the three models do you think will be best?
  2. Calculate the R-squared values of these 3 models. Which model do you think is best?
summary(poly_mod_1)$r.squared
## [1] 0.7341412
summary(poly_mod_2)$r.squared
## [1] 0.7630516
summary(poly_mod_9)$r.squared
## [1] 1
  3. Check out plots depicting the relationship estimated by these 3 models. Which model do you think is best?
# A plot of model 1
ggplot(penguins_small, aes(y = bill_len, x = flipper_len)) + 
    geom_point() + 
    geom_smooth(method = "lm", se = FALSE)

# A plot of model 2
ggplot(penguins_small, aes(y = bill_len, x = flipper_len)) + 
    geom_point() + 
    geom_smooth(method = "lm", formula = y ~ poly(x, 2), se = FALSE)

# A plot of model 9
ggplot(penguins_small, aes(y = bill_len, x = flipper_len)) + 
    geom_point() + 
    geom_smooth(method = "lm", formula = y ~ poly(x, 9), se = FALSE)

Exercise 5: Reflecting on these investigations

  1. List 3 of your favorite foods. Now imagine making a dish that combines all of these foods. Do you think it would taste good?
  2. Too many good things doesn’t necessarily make a better thing. Model 9 demonstrates that, with enough predictors, it’s always possible to get a perfect R-squared of 1, but there are drawbacks to putting more and more predictors into our model. Answer the following about model 9:
    • How easy would it be to interpret this model?
    • Would you say that this model captures the general trend of the relationship between bill_len and flipper_len?
    • How well do you think this model would generalize to penguins that were not included in the penguins_small sample? For example, would you expect these new penguins to fall on the wiggly model 9 curve?

Exercise 6: Overfitting

Model 9 provides an example of a model that is overfit to our sample data. That is, it picks up the tiny details of our data at the cost of losing the more general trends of the relationship of interest. Check out the following xkcd comic. Which plot pokes fun at overfitting?


Exercise 7: Questioning R-squared

Zooming out, explain some limitations of relying on R-squared to measure the strength / usefulness of a model.

Exercise 8: Adjusted R-squared

We’ve seen that, unless a predictor is redundant with another, adding it will increase R-squared. Even if that predictor is strongly multicollinear with another. Even if that predictor isn’t a good predictor! Thus if we only look at R-squared we might get overly greedy. We can check our greedy impulses a few ways. We take a more in-depth approach in STAT 253, but one quick alternative is reported right in our model summary() tables. Adjusted R-squared includes a penalty for incorporating more and more predictors. Mathematically (where \(n\) is the sample size and \(p\) is the number of non-intercept coefficients):

\[ \text{Adjusted } R^2 = 1 - (1 - R^2) \left( \frac{n-1}{n-p-1} \right) \]

Thus unlike R-squared, Adjusted R-squared can decrease when the information that a predictor contributes to a model isn’t enough to offset the complexity it adds to that model. Consider two models:

example_1 <- lm(bill_len ~ species, penguins)
example_2 <- lm(bill_len ~ species + island, penguins)
summary(example_1)
## 
## Call:
## lm(formula = bill_len ~ species, data = penguins)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -7.9338 -2.2049  0.0086  2.0662 12.0951 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       38.7914     0.2409  161.05   <2e-16 ***
## speciesChinstrap  10.0424     0.4323   23.23   <2e-16 ***
## speciesGentoo      8.7135     0.3595   24.24   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.96 on 339 degrees of freedom
##   (2 observations deleted due to missingness)
## Multiple R-squared:  0.7078, Adjusted R-squared:  0.7061 
## F-statistic: 410.6 on 2 and 339 DF,  p-value: < 2.2e-16
summary(example_2)
## 
## Call:
## lm(formula = bill_len ~ species + island, data = penguins)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -7.9338 -2.2049 -0.0049  2.0951 12.0951 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)      38.97500    0.44697  87.198   <2e-16 ***
## speciesChinstrap 10.33204    0.53502  19.312   <2e-16 ***
## speciesGentoo     8.52988    0.52082  16.378   <2e-16 ***
## islandDream      -0.47321    0.59729  -0.792    0.429    
## islandTorgersen  -0.02402    0.61004  -0.039    0.969    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.965 on 337 degrees of freedom
##   (2 observations deleted due to missingness)
## Multiple R-squared:  0.7085, Adjusted R-squared:  0.7051 
## F-statistic: 204.8 on 4 and 337 DF,  p-value: < 2.2e-16
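As a sanity check on the formula above, we can recompute example_1’s Adjusted R-squared by hand. A sketch: here n comes from nobs(), and p = 2 since species contributes two non-intercept coefficients:

```r
# Recompute Adjusted R-squared from its definition
r2 <- summary(example_1)$r.squared
n <- nobs(example_1)  # number of penguins actually used in the fit
p <- 2                # non-intercept coefficients (the two species dummies)
1 - (1 - r2) * (n - 1) / (n - p - 1)

# Compare to the value R reports
summary(example_1)$adj.r.squared
```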
  1. Check out the summaries for the 2 example models. In general, how does a model’s Adjusted R-squared compare to the R-squared? Is it greater, less than, or equal to the R-squared?
  2. How did the R-squared change from example model 1 to model 2? How did the Adjusted R-squared change?
  3. Explain what it is about island that resulted in a decreased Adjusted R-squared. Note: it’s not necessarily the case that island is a bad predictor on its own!





Solutions

Exercise 1: Modeling bill length by flipper length

model predictors
penguin_model_1 flipper_len
penguin_model_2 flipper_len_cm
penguin_model_3 flipper_len + flipper_len_cm

Plots of the first two models are below:

ggplot(penguins, aes(y = bill_len, x = flipper_len)) + 
    geom_point() +
    geom_smooth(method = "lm", se = FALSE)


ggplot(penguins, aes(y = bill_len, x = flipper_len_cm)) + 
    geom_point() +
    geom_smooth(method = "lm", se = FALSE)

  1. Your intuition–answers will vary

  2. The R-squared values are all the same!

summary(penguin_model_1)$r.squared
## [1] 0.430574
summary(penguin_model_2)$r.squared
## [1] 0.430574
summary(penguin_model_3)$r.squared
## [1] 0.430574
  3. The two variables are perfectly linearly correlated—they contain exactly the same information!
ggplot(penguins, aes(x = flipper_len, y = flipper_len_cm)) +
    geom_point()

  4. An NA means that the coefficient couldn’t be estimated. In penguin_model_3, the interpretation of the flipper_len coefficient is the average change in bill length per millimeter change in flipper length, while holding flipper length in centimeters constant…this is impossible! We can’t hold flipper length in centimeters fixed while varying flipper length in millimeters—if one changes, the other must too. (In linear algebra terms, the matrix underlying our data is not of full rank.)

Exercise 2: Incorporating body_mass

In this exercise you’ll consider 3 models of bill_len:

model predictors
penguin_model_1 flipper_len
penguin_model_4 body_mass
penguin_model_5 flipper_len + body_mass
  1. flipper_len is a better predictor than body_mass because penguin_model_1 has an R-squared value of 0.4306 vs 0.3542 for penguin_model_4.

  2. Intuition check–answers will vary

  3. The R-squared for penguin_model_5 is 0.4329, which is slightly higher than that of penguin_model_1 (0.4306) and penguin_model_4 (0.3542).

  4. flipper_len and body_mass are positively correlated and thus contain related, but not completely redundant, information. There’s some information in flipper length for explaining bill length that isn’t captured by body mass, and vice versa.

ggplot(penguins, aes(x = flipper_len, y = body_mass)) +
    geom_point()

Exercise 3: Redundancy and Multicollinearity

model predictors
penguin_model_1 flipper_len
penguin_model_2 flipper_len_cm
penguin_model_3 flipper_len + flipper_len_cm
penguin_model_4 body_mass
penguin_model_5 flipper_len + body_mass
  1. penguin_model_3 had redundant predictors: flipper_len and flipper_len_cm.
  2. penguin_model_5 had multicollinear predictors: flipper_len and body_mass were related but not redundant
  3. R-squared will stay the same if we add a redundant predictor to a model.
  4. R-squared will increase by a small amount if we add a multicollinear predictor to a model.

Exercise 4: Considerations for strong models

  1. A gut check! Answers will vary
  2. Based on R-squared: recall that R-squared is interpreted as the proportion of variation in the outcome that our model explains. It would seem that higher is better, so poly_mod_9 might seem to be the best. BUT we’ll see where this reasoning is flawed soon!
  3. Based on the plots: Answers will vary

Exercise 5: Reflecting on these investigations

  1. salmon, chocolate, samosas. Together? Yuck!
  2. Regarding model 9:
    • NOT easy to interpret.
    • NO. It’s much more wiggly than the general trend.
    • NOT WELL. It is too tailored to our data.

Exercise 6: Overfitting

The bottom left plot pokes fun at overfitting.

Exercise 7: Questioning R-squared

It measures how well our model explains / predicts our sample data, not how well it explains / predicts the broader population. It also has the feature that any non-redundant predictor added to a model will increase the R-squared.

Exercise 8: Adjusted R-squared

  1. Adjusted R-squared is less than the R-squared
  2. From model 1 to 2, R-squared increased and Adjusted R-squared decreased.
  3. island didn’t provide useful information about bill length beyond what was already provided by species.